Variance Reduction for Matrix Computations with Applications to Gaussian Processes

Authors

Abstract

In addition to recent developments in computing speed and memory, methodological advances have contributed significant gains to the performance of stochastic simulation. In this paper, we focus on variance reduction for matrix computations via matrix factorization. We provide insights into existing variance reduction methods for estimating the entries of large matrices. Popular methods do not exploit the reduction in variance that is possible when the matrix is factorized. We show how the square root factorization of the matrix can achieve, in some important cases, arbitrarily better stochastic performance. In addition, we propose a factorized estimator for the trace of a product of matrices and numerically demonstrate that it can be up to 1,000 times more efficient on certain problems of estimating the log-likelihood of a Gaussian process. Additionally, we propose a novel estimator of the log-determinant of a positive semi-definite matrix, where the log-determinant is treated as the normalizing constant of a probability density.
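The general principle behind a factorized trace estimator can be sketched as follows: for symmetric positive semi-definite A and B, tr(AB) = tr(B^{1/2} A B^{1/2}), and applying Gaussian Hutchinson probes to the symmetric positive semi-definite matrix B^{1/2} A B^{1/2} rather than to AB never increases the variance. The NumPy sketch below illustrates this idea on small assumed test matrices; the matrix sizes, sample count, and eigendecomposition-based square root are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small PSD test matrices (the paper targets large-scale problems).
n = 50
X = rng.standard_normal((n, n))
A = X @ X.T / n + np.eye(n)
Y = rng.standard_normal((n, n))
B = Y @ Y.T / n + np.eye(n)

# Symmetric square root B^{1/2} via the eigendecomposition of B.
w, V = np.linalg.eigh(B)
B_half = (V * np.sqrt(w)) @ V.T

def hutchinson(M, n_samples=10_000):
    """Per-probe Monte Carlo estimates z^T M z with standard Gaussian probes z."""
    Z = rng.standard_normal((n_samples, M.shape[0]))
    return np.einsum('ij,jk,ik->i', Z, M, Z)

# Plain estimator: probes applied directly to the (non-symmetric) product AB.
plain = hutchinson(A @ B)
# Factorized estimator: tr(AB) = tr(B^{1/2} A B^{1/2}), a PSD matrix.
fact = hutchinson(B_half @ A @ B_half)

exact = np.trace(A @ B)
print(exact, plain.mean(), fact.mean())   # both estimators are unbiased
print(plain.var(), fact.var())            # factorized variance is typically smaller
```

For Gaussian probes the per-sample variance is 2||sym(M)||_F^2, and the gap between the two estimators works out to ||AB - BA||_F^2, so the factorized form helps most when A and B are far from commuting.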


Related articles

Using Gaussian Processes for Variance Reduction in Policy Gradient Algorithms*

Gradient-based policy optimization algorithms suffer from high gradient variance, which is usually the result of using Monte Carlo estimates of the Q-value function in the gradient calculation. By replacing this estimate with a function approximator on the state-action space, the gradient variance can be reduced significantly. In this paper we present a method for the training of a Gaussian Process t...


Expectiles for subordinated Gaussian processes with applications

In this paper, we introduce a new class of estimators of the Hurst exponent of the fractional Brownian motion (fBm) process. These estimators are based on sample expectiles of discrete variations of a sample path of the fBm process. In order to derive the statistical properties of the proposed estimators, we establish asymptotic results for sample expectiles of subordinated stationary Gaussian ...


Applications of Matrix Computations to Search Engines

The central part of the algorithm deals with computing a dominant eigenvector (called the PageRank vector) of a huge stochastic irreducible matrix (the number of columns is on the order of billions). For practical and theoretical reasons the original matrix is modified or perturbed by introducing a so-called personalization vector, which allows one to manipulate not only the ranks of particular Web...
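The perturbed-matrix construction described in this excerpt can be sketched with a minimal power iteration; the toy 4-page link matrix, the damping factor 0.85, and the uniform personalization vector below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Toy column-stochastic link matrix: P[i, j] is the probability of moving
# from page j to page i (4 hypothetical pages, not from the article).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.0, 0.0, 0.5],
    [1/3, 0.5, 0.5, 0.0],
])

alpha = 0.85              # damping factor (a common choice)
v = np.full(4, 0.25)      # personalization vector (uniform here)

# Power iteration on the perturbed matrix G = alpha*P + (1-alpha)*v*1^T,
# applied matrix-free so that G is never formed explicitly.
r = np.full(4, 0.25)
for _ in range(100):
    r = alpha * (P @ r) + (1 - alpha) * v

print(r)          # dominant eigenvector of G: the PageRank vector
print(r.sum())    # stays a probability distribution throughout
```

Biasing the personalization vector v toward chosen pages shifts probability mass toward them, which is the manipulation of ranks the excerpt refers to.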


The Rate of Entropy for Gaussian Processes

In this paper, we show that in order to obtain the Tsallis entropy rate for stochastic processes, we can use the limit of conditional entropy, as was done for the Shannon and Rényi entropy rates. Using this, we obtain the Tsallis entropy rate for stationary Gaussian processes. Finally, we derive the relation between the Rényi, Shannon, and Tsallis entropy rates for stationary Gaussian proc...


Moving Average Processes with Infinite Variance

The sample autocorrelation function (acf) of a stationary process has played a central statistical role in traditional time series analysis, where the assumption is made that the marginal distribution has a second moment. However, the classical methods based on the acf are not applicable in heavy-tailed modeling. Using the codifference function as a dependence measure for such processes, it can be shown to be as...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-92511-6_16